Skip-gram Language Modeling Using Sparse Non-negative Matrix Probability Estimation
Authors
Noam Shazeer, Joris Pelemans, Ciprian Chelba
Abstract
We present a novel family of language model (LM) estimation techniques named Sparse Non-negative Matrix (SNM) estimation. A first set of experiments empirically evaluating it on the One Billion Word Benchmark [Chelba et al., 2013] shows that SNM n-gram LMs perform almost as well as the well-established Kneser-Ney (KN) models. When using skip-gram features the models are able to match the state-of-the-art recurrent neural network (RNN) LMs; combining the two modeling techniques yields the best known result on the benchmark. The computational advantages of SNM over both maximum entropy and RNN LM estimation are probably its main strength, promising an approach that has the same flexibility in combining arbitrary features effectively and yet should scale to very large amounts of data as gracefully as n-gram LMs do.
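To make the estimation concrete: in an SNM model, the probability of a word given its history is proportional to the summed weights, in a sparse non-negative matrix, of all features that fire on the history. Below is a minimal Python sketch of that computation; the feature names, weights, and vocabulary are toy values of our own, not the paper's.

```python
from collections import defaultdict

# Toy sparse non-negative weight matrix M[feature][word] >= 0.
# All feature names and weights here are invented for illustration.
M = {
    "ngram:the":        {"cat": 2.0, "dog": 1.5, "mat": 0.5},
    "ngram:on the":     {"mat": 3.0, "cat": 0.2},
    "skip-1:sat _ the": {"mat": 1.0},
}

def snm_prob(history_features, vocab):
    """P(w | h) = sum over active features f of M[f][w], normalized over vocab."""
    scores = defaultdict(float)
    for f in history_features:
        for w, weight in M.get(f, {}).items():
            scores[w] += weight
    total = sum(scores.values())
    return {w: scores[w] / total for w in vocab}

features = ["ngram:the", "ngram:on the", "skip-1:sat _ the"]
print(snm_prob(features, vocab=["cat", "dog", "mat"]))
# {'cat': 0.268..., 'dog': 0.182..., 'mat': 0.548...}
```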
Similar Papers
Sparse non-negative matrix language modeling for skip-grams
We present a novel family of language model (LM) estimation techniques named Sparse Non-negative Matrix (SNM) estimation. A first set of experiments empirically evaluating these techniques on the One Billion Word Benchmark [3] shows that with skip-gram features SNM LMs are able to match the state-of-the-art recurrent neural network (RNN) LMs; combining the two modeling techniques yields the best ...
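For context, skip-gram features generalize n-grams by allowing gaps between context words. The sketch below enumerates such features for a history under a naming scheme of our own invention; the exact feature templates used in the paper may differ.

```python
def skipgram_features(history, max_ngram=3, max_skip=2):
    """Enumerate features of a history: n-gram suffixes plus skip-grams
    that bridge a gap of skipped words. Naming scheme is illustrative."""
    feats = []
    n = len(history)
    # Plain n-gram suffixes of the history.
    for k in range(1, min(max_ngram, n) + 1):
        feats.append("ngram:" + " ".join(history[n - k:]))
    # Skip-grams: a remote word, s skipped positions ('_'), the last word.
    for s in range(1, max_skip + 1):
        if n >= s + 2:
            remote = history[n - s - 2]
            gap = " ".join(["_"] * s)
            feats.append(f"skip-{s}:{remote} {gap} {history[-1]}")
    return feats

print(skipgram_features(["the", "cat", "sat", "on", "the"]))
# ['ngram:the', 'ngram:on the', 'ngram:sat on the',
#  'skip-1:sat _ the', 'skip-2:cat _ _ the']
```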
Sparse Non-negative Matrix Language Modeling
We present Sparse Non-negative Matrix (SNM) estimation, a novel probability estimation technique for language modeling that can efficiently incorporate arbitrary features. We evaluate SNM language models on two corpora: the One Billion Word Benchmark and a subset of the LDC English Gigaword corpus. Results show that SNM language models trained with n-gram features are a close match for the well...
Multinomial Loss on Held-out Data for the Sparse Non-negative Matrix Language Model
We describe Sparse Non-negative Matrix (SNM) language model estimation using multinomial loss on held-out data. Being able to train on held-out data is important in practical situations where the training data is usually mismatched from the held-out/test data. It is also less constrained than the previous training algorithm using leave-one-out on training data: it allows the use of richer meta-...
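As a rough illustration of the idea, the sketch below fits per-feature adjustment parameters by stochastic gradient descent on the multinomial (cross-entropy) loss over held-out data. The log-scale parameterization, toy weights, and data are assumptions of ours, not the paper's actual training algorithm.

```python
import math

# Toy setup: fixed sparse non-negative weights M, plus per-feature
# log-scale adjustments theta trained on held-out data (assumed form).
M = {"f1": {"a": 1.0, "b": 0.5}, "f2": {"a": 0.5, "b": 1.0}}
vocab = ["a", "b"]
theta = {f: 0.0 for f in M}
held_out = [({"f1", "f2"}, "a"), ({"f2"}, "b")]  # (active features, next word)

def sgd_step(feats, w, lr=0.1):
    """One SGD step on the multinomial loss -log P(w | feats)."""
    s_w = sum(math.exp(theta[f]) * M[f].get(w, 0.0) for f in feats)
    s_all = sum(math.exp(theta[f]) * M[f][v] for f in feats for v in vocab)
    for f in feats:
        # dL/dtheta_f: total feature mass share minus mass share on the observed word.
        g = (math.exp(theta[f]) * sum(M[f].values()) / s_all
             - math.exp(theta[f]) * M[f].get(w, 0.0) / s_w)
        theta[f] -= lr * g

for _ in range(100):
    for feats, w in held_out:
        sgd_step(feats, w)
print(theta)  # adjustments drift to better fit the held-out distribution
```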
Hybrid N-gram Probability Estimation in Morphologically Rich Languages
N-gram language modeling is essential in natural language processing and speech processing. In morphologically rich languages such as Korean, a word usually consists of at least one lemma (content morpheme) plus functional morphemes that express various grammatical functions. Most word forms in Korean, however, suffer from sparse data and zero probability because of quite complex morpheme combina...
An integrated language modeling with n-gram model and WA model for speech recognition
In traditional n-gram models, the small value of n is an inherent limitation when estimating language probabilities for speech recognition, simply because the estimation cannot draw on longer-range word associations, only on short-span sequential word correlations. This has a strong effect on speech recognition performance. This paper introduces an integrated language modeling ...
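The excerpt does not spell out the integration; one standard way to combine a short-span n-gram model with a longer-range word-association (WA) model is linear interpolation, sketched below. The weight and component probabilities are made up, and whether the paper uses exactly this scheme is not stated here.

```python
def interpolated_prob(p_ngram, p_wa, lam=0.7):
    """Linear interpolation of a short-span n-gram model with a longer-range
    word-association (WA) model; `lam` would normally be tuned on held-out
    data (0.7 here is arbitrary)."""
    return lam * p_ngram + (1.0 - lam) * p_wa

# Hypothetical component probabilities for the same next word:
print(interpolated_prob(p_ngram=0.012, p_wa=0.030))  # 0.7*0.012 + 0.3*0.030 = 0.0174
```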
Journal: CoRR
Volume: abs/1412.1454
Publication date: 2014